
    Incorporating prediction models in the SelfLet framework: a plugin approach

    A complex pervasive system is typically composed of many cooperating nodes, running on machines with different capabilities and pervasively distributed across the environment. These systems pose several new challenges, such as the need for nodes to manage themselves autonomously and dynamically in order to adapt to changes detected in the environment. To address this issue, a number of autonomic frameworks have been proposed. These usually offer either predefined self-management policies or programmatic mechanisms for creating new policies at design time. From a more theoretical perspective, some works propose the adoption of prediction models as a way to anticipate the evolution of the system and to make timely decisions. In this context, our aim is to experiment with the integration of prediction models within a specific autonomic framework, in order to assess the feasibility of such integration in a setting where dynamicity, decentralization, and cooperation among nodes are important. We extend an existing infrastructure called SelfLets to make it ready to host various prediction models that can be dynamically plugged into and unplugged from the component nodes, thus enabling a wide range of predictions to be performed. We also show, through a simple example, how the system works when adopting a specific prediction model from the literature.
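
    The plug-and-unplug mechanism described above can be pictured as a small interface contract between a node and its prediction models. The sketch below is a minimal illustration of that idea; all class and method names are hypothetical and do not come from the actual SelfLets code base.

```python
# Hypothetical sketch of a pluggable prediction-model interface; the names
# are illustrative and not taken from the SelfLets implementation.
from abc import ABC, abstractmethod


class PredictionModel(ABC):
    """A model a node can plug in or unplug at runtime."""

    @abstractmethod
    def update(self, observation: float) -> None:
        """Feed the model a new measurement from the environment."""

    @abstractmethod
    def predict(self, horizon: int) -> float:
        """Estimate the monitored quantity `horizon` steps ahead."""


class MovingAveragePredictor(PredictionModel):
    """Toy example model: predicts the mean of the last few observations."""

    def __init__(self, window: int = 10):
        self.samples: list[float] = []
        self.window = window

    def update(self, observation: float) -> None:
        self.samples = (self.samples + [observation])[-self.window:]

    def predict(self, horizon: int) -> float:
        return sum(self.samples) / len(self.samples) if self.samples else 0.0


class Node:
    """Each node hosts at most one model per metric and can swap it freely."""

    def __init__(self) -> None:
        self.models: dict[str, PredictionModel] = {}

    def plug(self, metric: str, model: PredictionModel) -> None:
        self.models[metric] = model

    def unplug(self, metric: str) -> None:
        self.models.pop(metric, None)
```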

    When software architecture leads to social debt

    An Architecture for Information Commerce Systems

    The increasing use of the Internet in business and commerce has created a number of new business opportunities and the need for supporting models and platforms. One of these opportunities is information commerce (i-commerce), a special case of e-commerce focused on the purchase and sale of information as a commodity. In this paper, we present an architecture for i-commerce systems using OPELIX (Open Personalized Electronic Information Commerce System) [11] as an example. OPELIX provides an open information commerce platform that enables enterprises to produce, sell, deliver, and manage information products and related services over the Internet. We focus on the notion of an information marketplace, a virtual location that enables i-commerce, describe the business and domain model for an information marketplace, and discuss the role of intermediaries in this environment. The domain model is used as the basis for the software architecture of the OPELIX system. We discuss the characteristics of the OPELIX architecture and compare our approach to related work in the field.
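
    As a rough illustration of the roles named above (producers, intermediaries, and the information products traded in a marketplace), the following sketch encodes them as plain data types; it is a hypothetical simplification, not the OPELIX domain model.

```python
# Hypothetical, simplified rendering of information-marketplace roles;
# the classes below are illustrative, not OPELIX's actual domain model.
from dataclasses import dataclass, field


@dataclass
class InformationProduct:
    title: str
    price: float
    producer_name: str


@dataclass
class Producer:
    """Creates information products and offers them for sale."""
    name: str
    catalog: list[InformationProduct] = field(default_factory=list)

    def publish(self, title: str, price: float) -> InformationProduct:
        product = InformationProduct(title, price, self.name)
        self.catalog.append(product)
        return product


@dataclass
class Intermediary:
    """Brokers between producers and consumers, e.g. aggregating catalogs."""
    name: str
    listings: list[InformationProduct] = field(default_factory=list)

    def list_product(self, product: InformationProduct) -> None:
        self.listings.append(product)

    def search(self, keyword: str) -> list[InformationProduct]:
        return [p for p in self.listings if keyword.lower() in p.title.lower()]


# Example: a producer publishes a report, a marketplace intermediary lists it.
acme = Producer("Acme Research")
broker = Intermediary("InfoMarket")
broker.list_product(acme.publish("Q3 Market Report", 9.99))
print([p.title for p in broker.search("market")])  # ['Q3 Market Report']
```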

    Fault-tolerant off-line data migration: the Hegira4Clouds approach

    The Cloud offers the potential to support high scalability of applications. An increase in the application workload is typically handled by triggering the replication of its components, so as to increase the computational capability the application offers to users.
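
    The scaling rule sketched in this abstract, where the replica count follows the workload, reduces to simple arithmetic; the function and the capacity figures below are invented for illustration and are not Hegira4Clouds internals.

```python
# Minimal sketch of workload-driven replication as described above; the
# function and its parameters are illustrative, not Hegira4Clouds internals.
import math


def desired_replicas(request_rate: float, capacity_per_replica: float,
                     min_replicas: int = 1) -> int:
    """Replicate components until aggregate capacity covers the workload."""
    return max(min_replicas, math.ceil(request_rate / capacity_per_replica))


# Example: 950 req/s against replicas sustaining 300 req/s each -> 4 replicas.
print(desired_replicas(950, 300))  # 4
```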

    Monitoring in a multi-cloud environment

    The Cloud brings velocity to the development and release process of applications; however, software systems become complex, distributed across multiple clouds, dynamic, and heterogeneous, leveraging both PaaS and IaaS resources. In this context, gathering feedback on the health and usage of services becomes very hard with traditional monitoring tools, since these were built for on-premise solutions that offer uniform monitoring APIs, under the assumption that the application configuration evolves slowly over time.
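
    One common way to cope with this heterogeneity is to hide each provider's native monitoring API behind a uniform facade. The sketch below illustrates that idea under invented names; it is not the architecture of any specific tool.

```python
# Illustrative facade over heterogeneous cloud monitoring APIs; provider
# names, endpoints, and stubbed values below are all hypothetical.
from abc import ABC, abstractmethod


class MetricSource(ABC):
    """Uniform interface over one provider's native monitoring API."""

    @abstractmethod
    def read(self, resource_id: str, metric: str) -> float: ...


class PaasSource(MetricSource):
    def read(self, resource_id: str, metric: str) -> float:
        # A real adapter would call the PaaS provider's metrics endpoint.
        return 0.42  # stubbed value for illustration


class IaasSource(MetricSource):
    def read(self, resource_id: str, metric: str) -> float:
        # A real adapter would query an agent on the rented VM.
        return 0.65  # stubbed value for illustration


class MultiCloudMonitor:
    """Single collection point over deployments that span several clouds."""

    def __init__(self, sources: dict[str, MetricSource]):
        self.sources = sources

    def snapshot(self, placements: list[tuple[str, str]]) -> dict:
        # `placements` may change at runtime as components migrate or
        # replicate across clouds, which is what defeats static tooling.
        return {(cloud, res): self.sources[cloud].read(res, "cpu_utilization")
                for cloud, res in placements}


monitor = MultiCloudMonitor({"paas-eu": PaasSource(), "iaas-us": IaasSource()})
print(monitor.snapshot([("paas-eu", "web-frontend"), ("iaas-us", "vm-42")]))
```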

    A dataset for mobile edge computing network topologies

    Mobile Edge Computing (MEC) is vital to support the numerous future applications envisioned in 5G and beyond mobile networks. Since computation capabilities are available at the edge of the network, applications that need ultra-low latency, high bandwidth, and reliability can be deployed more easily. This opens up the possibility of developing smart resource allocation approaches that can exploit the MEC infrastructure in an optimized way and, at the same time, fulfill the requirements of applications. To date, however, progress in this research area has been limited by the lack of publicly available real MEC topologies that could be used to run extensive experiments and to compare the performance of different solutions concerning planning, scheduling, routing, etc. For this reason, we decided to infer and make publicly available several synthetic MEC topologies and scenarios. Specifically, based on the experience gathered in our experiments (Xiang et al. [1]), we provide data related to three randomly generated topologies of increasing network size (from 25 to 100 nodes). Moreover, we propose a MEC topology generated from real OpenCellID [2] data, covering the base station locations of 234 LTE cells owned by a mobile operator (Vodafone) in the center of Milan. We also provide realistic reference parameters (link bandwidth, computation and storage capacity, offered traffic), derived from real services provided by MEC in the deployment of 5G networks.
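
    To give a flavor of what such a synthetic topology looks like, the snippet below generates and annotates a small random graph with networkx; the capacity and bandwidth ranges are illustrative assumptions, not the values shipped with the dataset.

```python
# Illustrative generation of a small synthetic MEC topology; the capacity
# and bandwidth ranges are invented, not the dataset's actual parameters.
import random

import networkx as nx


def random_mec_topology(n_nodes: int = 25, edge_prob: float = 0.2,
                        seed: int = 0) -> nx.Graph:
    rng = random.Random(seed)
    g = nx.erdos_renyi_graph(n_nodes, edge_prob, seed=seed)
    for node in g.nodes:
        g.nodes[node]["cpu_cores"] = rng.choice([8, 16, 32])        # computation
        g.nodes[node]["storage_gb"] = rng.choice([256, 512, 1024])  # storage
    for edge in g.edges:
        g.edges[edge]["bandwidth_mbps"] = rng.choice([100, 1000, 10000])
    return g


topo = random_mec_topology()
print(topo.number_of_nodes(), topo.number_of_edges())
```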

    Resource Calendaring for Mobile Edge Computing in 5G Networks

    Mobile Edge Computing (MEC) is a key technology for the deployment of next-generation (5G and beyond) mobile networks, specifically for reducing the latency experienced by mobile users that require ultra-low latency, high bandwidth, and real-time access to the radio network. In this paper, we propose an optimization framework that considers several key aspects of the resource allocation problem for MEC, carefully modeling and optimizing the allocation of network resources, including the computation and storage capacity available on network nodes as well as link capacity. Specifically, both an exact optimization model and an effective heuristic are provided, jointly optimizing (1) the connection admission decisions, (2) their scheduling, also called calendaring, (3) their routing, (4) the decision of which nodes will serve such connections, and (5) the amount of processing and storage capacity reserved on the chosen nodes. Numerical experiments are conducted in several real-size network scenarios, and demonstrate that the heuristic performs close to the optimum in all the considered scenarios while exhibiting low computing time.
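
    The five joint decisions can be made concrete with a deliberately naive greedy sketch: try each start slot, each candidate serving node, and a shortest path, and reserve capacity only when everything fits. This illustrates the shape of the problem under invented names and values; it is not the paper's model or heuristic.

```python
# Deliberately naive greedy sketch of the joint decisions listed above;
# it illustrates the problem structure, not the paper's heuristic.
import networkx as nx

HORIZON = 12  # illustrative number of time slots in the planning horizon


def try_admit(g: nx.Graph, src, cpu: float, bw: float, duration: int):
    """Greedy joint decision: (1) admission, (2) start slot (calendaring),
    (3) routing, (4) serving node, (5) capacity reservation."""
    for start in range(HORIZON - duration + 1):               # (2) calendaring
        slots = range(start, start + duration)
        for node in g.nodes:                                  # (4) node choice
            if node == src:  # for illustration, serve away from the source
                continue
            if any(g.nodes[node]["cpu_free"][t] < cpu for t in slots):
                continue
            try:
                path = nx.shortest_path(g, src, node)         # (3) routing
            except nx.NetworkXNoPath:
                continue
            links = list(zip(path, path[1:]))
            if any(g.edges[e]["bw_free"][t] < bw
                   for e in links for t in slots):
                continue
            for t in slots:                                   # (5) reservation
                g.nodes[node]["cpu_free"][t] -= cpu
                for e in links:
                    g.edges[e]["bw_free"][t] -= bw
            return start, node, path                          # (1) admitted
    return None                                               # (1) rejected


# Tiny example: a 3-node line with per-slot free capacities on nodes/links.
g = nx.path_graph(3)
for n in g.nodes:
    g.nodes[n]["cpu_free"] = [8.0] * HORIZON
for e in g.edges:
    g.edges[e]["bw_free"] = [1000.0] * HORIZON
print(try_admit(g, src=0, cpu=4.0, bw=200.0, duration=3))  # (0, 1, [0, 1])
```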

    DEPAS: A Decentralized Probabilistic Algorithm for Auto-Scaling

    The dynamic provisioning of virtualized resources offered by cloud computing infrastructures allows applications deployed in a cloud environment to automatically increase and decrease the amount of resources they use. This capability is called auto-scaling, and its main purpose is to automatically adjust the scale of the system running the application so as to satisfy the varying workload with minimum resource utilization. The need for auto-scaling is particularly important during workload peaks, in which applications may need to scale up to extremely large-scale systems. Both the research community and the main cloud providers have already developed auto-scaling solutions. However, most research solutions are centralized and not suitable for managing large-scale systems; moreover, cloud providers' solutions are bound to the limitations of a specific provider in terms of resource prices, availability, reliability, and connectivity. In this paper, we propose DEPAS, a decentralized probabilistic auto-scaling algorithm integrated into a P2P architecture that is cloud-provider independent, thus allowing the auto-scaling of services over multiple cloud infrastructures at the same time. Our simulations, which are based on real service traces, show that our approach is capable of (i) keeping the overall utilization of all the instantiated cloud resources in a target range, and (ii) maintaining service response times close to the ones obtained using optimal centralized auto-scaling approaches.
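
    The core of a decentralized probabilistic rule of this kind is that every replica flips its own biased coin, so no coordinator is needed. The sketch below captures that spirit with an invented probability function; the actual DEPAS formulas may differ.

```python
# Sketch of a decentralized probabilistic auto-scaling rule in the spirit
# of DEPAS; the probability expressions are illustrative, not the paper's.
import random


def autoscale_decision(local_load: float, capacity: float,
                       target_util: float, rng: random.Random) -> str:
    """Run independently on every replica, with no central coordinator."""
    util = local_load / capacity
    if util > target_util:
        # Relative excess over the target acts as the spawn probability.
        p_add = min(1.0, util / target_util - 1.0)
        if rng.random() < p_add:
            return "spawn_instance"
    elif util < target_util:
        # Relative slack under the target acts as the removal probability.
        p_remove = min(1.0, 1.0 - util / target_util)
        if rng.random() < p_remove:
            return "terminate_self"
    return "no_op"


# At 90% utilization against a 60% target, each replica spawns with p = 0.5.
print(autoscale_decision(90.0, 100.0, 0.6, random.Random(7)))
```

    Because each of the N replicas draws independently with the same probability p, the expected number of spawned instances is N·p, which approximates a centralized scaling decision without any global coordination.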

    Towards a UML Profile for Data Intensive Applications

    Data-intensive applications that leverage Big Data technologies are rapidly gaining traction in the market. However, their design and quality assurance are far from satisfying software engineers' needs. In fact, a CapGemini study shows that only 13% of organizations have achieved full-scale production for their Big Data implementations. We aim at addressing the early design and quality evaluation of data-intensive applications, our goal being to help software engineers assess quality metrics, such as the response time of the application. We address this goal by means of a quality-analysis tool-chain. At the core of the tool-chain, we are developing a Profile that turns the Unified Modeling Language into a domain-specific modeling language for the quality evaluation of data-intensive applications.
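
    As a toy example of the kind of quality metric such a tool-chain would evaluate, the snippet below attaches profile-like annotations (arrival and service rates) to a model element and derives a mean response time from the standard M/M/1 formula. Both the annotation names and the choice of formula are stand-ins for illustration, not the Profile itself.

```python
# Toy illustration of model-level quality evaluation: profile-like
# annotations feed a performance estimate. The annotations and the M/M/1
# formula are illustrative stand-ins, not the actual UML Profile.
from dataclasses import dataclass


@dataclass
class AnnotatedNode:
    name: str
    arrival_rate: float   # jobs/s entering this computation step
    service_rate: float   # jobs/s the step can process


def response_time(node: AnnotatedNode) -> float:
    """M/M/1 mean response time: R = 1 / (mu - lambda), requires mu > lambda."""
    if node.service_rate <= node.arrival_rate:
        raise ValueError(f"{node.name} is saturated")
    return 1.0 / (node.service_rate - node.arrival_rate)


stage = AnnotatedNode("map_stage", arrival_rate=80.0, service_rate=100.0)
print(f"{response_time(stage):.3f} s")  # 0.050 s
```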